A.I. amplifies 'help speech' to fight hate speech online - Futurity

#artificialintelligence

You are free to share this article under the Attribution 4.0 International license. A new system leverages artificial intelligence to rapidly analyze hundreds of thousands of comments on social media and identify the fraction that defend or sympathize with disenfranchised minorities such as the Rohingya community. The Rohingyas began fleeing Myanmar in 2017 to avoid ethnic cleansing. Human social media moderators, who couldn't possibly manually sift through so many comments, would then have the option to highlight this "help speech" in comment sections. "Even if there's lots of hateful content, we can still find positive comments," says Ashiqur R. KhudaBukhsh, a postdoctoral researcher in the Language Technologies Institute (LTI) at Carnegie Mellon University who conducted the research with alumnus Shriphani Palakodety.
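The workflow the article describes, scoring a large pool of comments and surfacing the small supportive fraction for moderators, can be illustrated with a toy scorer. Everything below (the `SUPPORT_TERMS` lexicon, the sample comments, the threshold) is an invented placeholder, not the CMU group's actual classifier, which used trained language models.

```python
# Toy "help speech" filter: ranks comments by overlap with a small
# lexicon of supportive terms, standing in for a trained classifier.
SUPPORT_TERMS = {"support", "welcome", "help", "refuge", "solidarity", "stand"}

def help_score(comment: str) -> float:
    """Fraction of a comment's words that are supportive terms (0.0-1.0)."""
    words = comment.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!?") in SUPPORT_TERMS for w in words) / len(words)

def surface_help_speech(comments, threshold=0.2):
    """Return the comments a moderator should see first, highest score first."""
    scored = [(help_score(c), c) for c in comments]
    return [c for s, c in sorted(scored, reverse=True) if s >= threshold]

comments = [
    "They should all go back",            # hostile
    "We stand in solidarity, welcome!",   # supportive
    "No comment",
]
print(surface_help_speech(comments))  # → ['We stand in solidarity, welcome!']
```

The point of the design is the same as in the article: the machine does the exhaustive sifting, and humans only review the short ranked list it surfaces.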


Oh dear... AI models used to flag hate speech online are, er, racist against black people

#artificialintelligence

The internet is filled with trolls spewing hate speech, but machine learning algorithms can't help us clean up the mess. A paper by computer scientists at the University of Washington, Carnegie Mellon University, and the Allen Institute for Artificial Intelligence found that machines were more likely to flag tweets from black people than from white people as offensive. It all boils down to subtle differences in language. African-American English (AAE), often spoken in urban communities, is peppered with racial slang and profanities. But even when tweets contain what appear to be offensive words, the message itself often isn't abusive.
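The disparity the paper measured can be made concrete: compare how often a classifier wrongly flags non-offensive text as offensive (the false-positive rate) in each dialect group. The records below are invented for illustration; the study itself evaluated trained models on real annotated tweet corpora.

```python
def false_positive_rate(records, group):
    """FPR for one group: share of truly non-offensive items wrongly flagged.

    records: iterable of (group_label, truly_offensive, flagged_offensive).
    """
    negatives = [r for r in records if r[0] == group and not r[1]]
    if not negatives:
        return 0.0
    return sum(r[2] for r in negatives) / len(negatives)

# Invented example: the classifier flags 2 of 4 benign AAE tweets,
# but only 1 of 4 benign non-AAE tweets.
records = [
    ("aae", False, True), ("aae", False, True),
    ("aae", False, False), ("aae", False, False),
    ("sae", False, True), ("sae", False, False),
    ("sae", False, False), ("sae", False, False),
]
print(false_positive_rate(records, "aae"))  # 0.5
print(false_positive_rate(records, "sae"))  # 0.25
```

A gap between the two rates, as in this toy data, is exactly the kind of group-level bias the researchers reported.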


Fighting Words Not Ideas: Google's New AI-Powered Toxic Speech Filter Is The Right Approach

#artificialintelligence

Alphabet Jigsaw (formerly Google Ideas) officially unveiled this morning its new tool for fighting toxic speech online, appropriately called Perspective. Powered by a deep-learning model trained on more than 17 million manually reviewed reader comments provided by the New York Times, the tool assigns a given passage of text a score from 0 to 100% according to how similar it is to statements that human reviewers have previously rated as "toxic." What makes this new approach from Google so different from past approaches is that it largely focuses on language rather than ideas: for the most part you can express your thoughts freely and without fear of censorship as long as you express them clinically and clearly, whereas if you resort to emotional diatribes and name calling, you will be flagged regardless of what you talk about. What does this tell us about the future of toxic speech online and the notion of machines guiding humans to a more "perfect" humanity? One of the great challenges in filtering out "toxic" speech online is first defining what precisely counts as "toxic" and then determining how to remove such speech without infringing on people's ability to freely express their ideas.
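The scoring scheme described, rating a passage by how similar it is to comments humans already labeled "toxic," can be sketched with a nearest-neighbour similarity measure. The Jaccard word-overlap below and the tiny labeled set are illustrative assumptions only; Perspective itself uses a deep-learning model trained on millions of labeled comments.

```python
def jaccard(a: set, b: set) -> float:
    """Word-overlap similarity between two sets of words (0.0-1.0)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def toxicity_score(text, labeled_toxic):
    """0-100 score: similarity to the closest known-toxic comment."""
    words = set(text.lower().split())
    best = max((jaccard(words, set(t.lower().split())) for t in labeled_toxic),
               default=0.0)
    return round(100 * best, 1)

labeled_toxic = ["you are an idiot", "shut up you moron"]
print(toxicity_score("you are such an idiot", labeled_toxic))          # 80.0
print(toxicity_score("I respectfully disagree with this", labeled_toxic))  # 0.0
```

Note how the sketch mirrors the article's "words not ideas" framing: a clinically phrased disagreement shares no surface vocabulary with the toxic examples and scores near zero, whatever its actual viewpoint.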




Fighting Social Media Hate Speech With AI-Powered Bots

Forbes - Tech

Could AI-powered bots fight hate speech by flooding the internet with love? As social media platforms have become ever more intrinsic to how we live our lives and have evolved into the primary medium through which we communicate with and listen to the rest of the world, their rise has handed a megaphone to the world's hate and vitriol. Indeed, it was Twitter that initially stepped forward to staunchly defend the rights of terrorists and their sympathizers to communicate via its platform, before abruptly reversing itself in the face of fierce public criticism. Yet, despite myriad programs and policies designed on paper to fight abuse, in reality the platforms have done very little to curb the spread of hate speech, harassment, and violent threats. This raises the question of whether deep learning-powered "bots" could offer a powerful answer to online hate speech: deploy them en masse to report, counter, and overwhelm hateful posts in real time. Over the last few years, deep learning algorithms have made enormous advances in processing human text and imagery, at times approaching human levels of sophistication and accuracy, while even simple ELIZA bots have managed to carry on fairly convincing chats for more than half a century.


How police use AI to hunt drug dealers on Instagram

#artificialintelligence

New York state's top cops want to use machine-learning algorithms to detect drug dealers on social media networks like Instagram, a trend that "has become a severe problem in recent years," according to researchers from the University of Rochester and the New York Attorney General's office. Using social media to sell drugs began years ago and continues to this day. Newer networks like Tinder have become especially popular with drug dealers because they can connect sellers and customers in close physical proximity. All of the networks rely on manual user reports to remove the illegal content, in what has largely been a losing battle. The New York Attorney General's office co-authored new research on algorithms meant to examine millions of Instagram posts, spotlight likely drug dealers, and only then pass the suspects on to human officers for further investigation.
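The triage the researchers propose, machine scoring first with human review only for flagged accounts, is essentially a funnel. The term list, scoring rule, and sample posts below are invented stand-ins for the paper's trained model, kept deliberately crude for illustration.

```python
FLAGGED_TERMS = {"plug", "gas", "zips", "menu"}  # illustrative terms only

def suspicion_score(caption):
    """Count of flagged terms in a caption (stand-in for a trained model)."""
    return sum(w.strip(".,!?") in FLAGGED_TERMS for w in caption.lower().split())

def triage(posts, min_score=2):
    """Queue only high-scoring accounts for human investigators."""
    return [user for user, caption in posts if suspicion_score(caption) >= min_score]

posts = [
    ("acct_a", "dm for menu, gas in stock"),
    ("acct_b", "sunset photos from my trip"),
]
print(triage(posts))  # → ['acct_a']
```

The design choice matches the article: the algorithm never decides guilt, it only narrows millions of posts down to a short queue that human officers investigate.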